Extensions to McDiarmid’s inequality when differences are bounded with high probability

Author

  • Samuel Kutin
Abstract

The method of independent bounded differences (McDiarmid, 1989) gives large-deviation concentration bounds for multivariate functions in terms of the maximum effect that changing one coordinate of the input can have on the output. This method has been widely used in combinatorial applications and in learning theory. In some recent applications to the theory of algorithmic stability (Kutin and Niyogi, 2002), we need to consider the case where changing one coordinate of the input usually leads to a small change in the output, but not always. We prove two extensions to McDiarmid's inequality. The first applies when, for most inputs, any small change leads to a small change in the output. The second applies when, for a randomly selected input and a random one-coordinate change, the change in the output is usually small.
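For reference, the inequality being extended has a standard form (the notation below is ours, not quoted from the paper): if X_1, ..., X_n are independent and |f(x) - f(x')| <= c_i whenever x and x' differ only in the i-th coordinate, then

    \Pr\bigl( |f(X_1,\dots,X_n) - \mathbb{E}f(X_1,\dots,X_n)| \ge t \bigr)
        \le 2\exp\!\left( \frac{-2t^2}{\sum_{i=1}^n c_i^2} \right).

The two extensions described above relax the requirement that the bounds c_i hold for every input and every one-coordinate change.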


Similar articles

An extension of McDiarmid's inequality

We derive an extension of McDiarmid’s inequality for functions f with bounded differences on a high probability set Y (instead of almost surely). The behavior of f outside Y may be arbitrary. The proof is short and elementary, and relies on an extension argument similar to Kirszbraun’s theorem [4].
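As an illustration of the "bounded differences on a high-probability set" setting, the following Python sketch constructs a toy function whose one-coordinate differences are bounded by 1 everywhere except on a vanishingly rare bad set; the function f, the parameters, and the experiment are our own invented example, not code from any of the papers listed here.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50           # number of independent Bernoulli(1/2) coordinates
    trials = 200_000

    X = rng.integers(0, 2, size=(trials, n))

    def f(x):
        # Usually just a sum, so changing one coordinate moves f by at most 1,
        # but on the rare bad set (all coordinates equal to 1) f jumps to n*n.
        if x.all():
            return float(n * n)
        return float(x.sum())

    vals = np.apply_along_axis(f, 1, X)
    mu = vals.mean()

    for t in (5, 10, 15):
        empirical = float(np.mean(np.abs(vals - mu) >= t))
        # The bound one would get if c_i = 1 held everywhere (it does not,
        # because of the bad set); the demo shows the empirical tails still
        # respect it.
        bound = 2 * np.exp(-2 * t**2 / n)
        print(f"t={t:2d}: empirical tail {empirical:.1e}, McDiarmid bound {bound:.1e}")

Since the bad set has probability 2^{-50}, the simulation essentially never encounters it, and the empirical tails stay below the bound computed as though the differences were bounded everywhere; extensions of this kind make that intuition rigorous.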


On the Method of Typical Bounded Differences

Concentration inequalities are fundamental tools in probabilistic combinatorics and theoretical computer science for proving that random functions are near their means. Of particular importance is the case where f(X) is a function of independent random variables X = (X1, . . . , Xn). Here the well-known bounded differences inequality (also called McDiarmid's or the Hoeffding–Azuma inequality) estab...


Algorithmic Stability and Ensemble-based Learning (Ph.D. dissertation, Department of Computer Science, Division of the Physical Sciences, The University of Chicago), by Samuel Kutin

We explore two themes in formal learning theory. We begin with a detailed, general study of the relationship between the generalization error and stability of learning algorithms. We then examine ensemble-based learning from the points of view of stability, decorrelation, and threshold complexity. A central problem of learning theory is bounding generalization error. Most such bounds have been ...


Concentration in unbounded metric spaces and algorithmic stability

We prove an extension of McDiarmid’s inequality for metric spaces with unbounded diameter. To this end, we introduce the notion of the subgaussian diameter, which is a distribution-dependent refinement of the metric diameter. Our technique provides an alternative approach to that of Kutin and Niyogi’s method of weakly difference-bounded functions, and yields nontrivial, dimension-free results i...


Uncertainty quantification via codimension-one partitioning

We consider uncertainty quantification in the context of certification, i.e. showing that the probability of some "failure" event is acceptably small. In this paper, we derive a new method for rigorous uncertainty quantification and conservative certification by combining McDiarmid's inequality with input domain partitioning and a new concentration-of-measure inequality. We show that arbitr...
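For intuition about how McDiarmid's inequality yields a certification criterion (a standard consequence in our notation, not this paper's statement): if the performance function G has bounded differences c_1, ..., c_n, E[G] > 0, and "failure" is the event G(X) <= 0, then the one-sided inequality with t = E[G] gives

    \Pr\bigl( G(X) \le 0 \bigr) \le \exp\!\left( \frac{-2\,(\mathbb{E}G)^2}{\sum_{i=1}^n c_i^2} \right),

so certification amounts to showing that the mean safety margin E[G] is large relative to the aggregate sensitivity; partitioning the input domain can shrink the effective c_i on each subdomain, tightening the bound.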



Journal:

Volume   Issue

Pages   -

Publication date: 2002